posted 01-13-2012 07:05 AM
it is NDI. That's what they mean when they say "all positive, ignoring scores of zero."
Again, there is no published research on this technique.
However, Honts told me that he and Raskin had completed studies on this technique but they were not published.
What they seem to have known is that there is a proportion of truthful persons who will produce non-positive subtotal scores in the context of several event-specific test questions.
Because the test is imperfect, with accuracy presumably averaging somewhere over 90%, there is always the possibility of error - the possibility that a truthful person produces a sufficiently negative grand-total or sub-total score to merit a deceptive test result. That possibility seems to be a little less than 10%.
There is also the possibility (among other possibilities) that a truthful person will produce an inconclusive score. If we employ subtotal requirements in the decision rule for truthful (and inconclusive) results, then we are actually allowing sub-total scores (which are less accurate than the grand-total score) to control inconclusive results. In fact, because there are more sub-totals than grand-totals (duh!), and because the probability of an inconclusive accumulates with the number of questions/decisions, the sub-totals actually do more to mediate inconclusive results than the grand total does.
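To put some toy numbers on that accumulation (back-of-the-envelope only: the 20% per-spot figure is made up for illustration, and the arithmetic treats the spots as independent, which the evidence discussed below suggests they are not):

    def p_at_least_one_weak_spot(p_weak_per_spot, n_spots):
        # chance that at least one of n spots produces a non-positive score,
        # if each spot independently has the same per-spot probability
        return 1.0 - (1.0 - p_weak_per_spot) ** n_spots

    for n in (1, 2, 3, 4):
        print(n, round(p_at_least_one_weak_spot(0.20, n), 3))
    # 1 -> 0.2, 2 -> 0.36, 3 -> 0.488, 4 -> 0.59

The point is simply that every additional sub-total decision gives the truthful examinee another chance to stumble.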
Senter (2003) showed that two-stage rules can increase the accuracy of MGQT multi-facet exams compared to the spot-score rules. The meaning of this - consistent with earlier studies on the MGQT - is that there is no support for the hypothesis that the criterion variance of multi-facet questions is actually independent. Instead the evidence, in 1989, 1993 and 2003, seems to suggest the opposite - that the criterion variance of multi-facet investigative questions, regarding a single known or alleged incident, is non-independent.
Mark and I studied the data we have regarding single-issue Federal ZCT exams - for which people argue different opinions about whether it is a single-issue or multi-facet exam.
The correct question and argument is not whether the exam is single-issue or multi-facet, but whether the criterion variance (whatever external facts/circumstances/behaviors cause a person to be deceptive or truthful to each individual question) will or will not affect the criterion state of the other questions.
Questions are independent only if those external facts/behaviors/circumstances affect only individual questions and DO NOT affect other questions.
What matters most here is not someone's opinion but the answer that the evidence gives us. There are many situations in many different fields where opinion and evidence disagree - in which case one of them must be updated. (Hint: it's not OK to change the evidence, and it's also not OK to ignore the evidence.)
In looking at nearly 30 different samples (not 30 cases, 30 samples of cases) of Federal ZCT exams we found that more than 1/2 (>50%) of truthful persons produced a non-positive (zero or lower) sub-total score.
So, it is no surprise that the two-stage rules described by Senter & Dollins have become important. These rules place primary emphasis on the grand-total (big number), and secondary emphasis on the sub-totals (small numbers).
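Here is a quick sketch of what that looks like in practice (the cutscores are illustrative only, not the published values for any particular technique):

    def two_stage_decision(grand_total, subtotals, di_cut=-6, ndi_cut=6, spot_cut=-3):
        # Stage 1: the grand total (big number) gets primary emphasis.
        if grand_total <= di_cut:
            return "DI"
        if grand_total >= ndi_cut:
            return "NDI"
        # Stage 2: only when the grand total is inconclusive do the
        # sub-totals (small numbers) get a secondary look.
        if any(s <= spot_cut for s in subtotals):
            return "DI"
        return "INC"

    print(two_stage_decision(7, [3, 2, 2]))    # NDI on the grand total
    print(two_stage_decision(2, [4, 3, -5]))   # grand total inconclusive, DI on a spot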
Is it believable that more than 1/2 of truthful people may not pass a polygraph? We think so - under some circumstances. Look back at the most recently published Federal study that made use of Federal ZCT decision rules. Blackwell (1999) reported test specificity to truth-telling at rates that were less than chance. Studies since then have consistently shown that improved decision rules (two-stage) can improve overall test accuracy.
Still, Federal rules have their uses if you want to be exceedingly cautious and do not care about clearing the truthful person.
Anyway, Raskin and the Utah folks probably found similar evidence, and seem to have formulated rules that effectively manage the complex variance of multi-facet exams of known events. For ZCT exams the Utah folks simply use the grand-total decision rules - which avoids the problem altogether. (BTW, the grand-total rule consistently gives the highest level of test accuracy for nearly all techniques.)
All of this is the reason I say that the Utah MGQT decision rules, although somewhat complex, are actually correctly formulated.
With the ESS, these complexities are managed through the use of Bonferroni-corrected alpha cutscores (and two-stage rules) for results based on sub-total scores of ZCT exams, and through the use of Sidak-corrected cutscores for truthful results when using the spot-score rule. This achieves the same results using procedures that are simpler and more intuitive for field examiners.
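For anyone who wants to see the arithmetic behind those corrections (these are the standard formulas; the .05 alpha and three sub-totals are just an example, and turning a corrected alpha into an ESS cutscore is a separate lookup step):

    alpha, k = 0.05, 3

    bonferroni = alpha / k                    # Bonferroni-corrected per-spot alpha
    sidak = 1.0 - (1.0 - alpha) ** (1.0 / k)  # Sidak-corrected per-spot alpha

    print(round(bonferroni, 4))  # 0.0167
    print(round(sidak, 4))       # 0.017

Both corrections keep the familywise error rate near the nominal alpha when several sub-total decisions are made; Sidak is exact under independence, and Bonferroni is slightly more conservative.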
'mornin ya'll
r
------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)